Exact Hessian Calculation in Feedforward FIR Neural Networks

Authors

  • Tomasz J. Cholewo
  • Jacek M. Zurada
Abstract

FIR neural networks are feedforward neural networks in which the regular scalar synapses are replaced by linear finite impulse response (FIR) filters. This paper introduces the Second Order Temporal Backpropagation algorithm, which enables the exact calculation of the second-order error derivatives for an FIR neural network. The method builds on the error gradient calculation method first proposed by Wan and referred to as Temporal Backpropagation. A reduced FIR synapse model, obtained by ignoring unnecessary time lags, is proposed to reduce the number of network parameters.
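To make the architecture concrete, below is a minimal NumPy sketch (function and variable names are hypothetical, not from the paper) of one neuron whose incoming connections are FIR synapses: each connection stores a vector of tap weights and forms a dot product with the recent history of its input line, instead of multiplying a single scalar weight. The reduced synapse model mentioned above corresponds to dropping (zeroing) taps at unneeded time lags.

```python
import numpy as np

def fir_synapse(taps, history):
    """One FIR synapse: weight the recent input history instead of
    scaling only the current value. history[0] is the newest input
    sample, history[k] the sample delayed by k steps; taps holds
    one weight per delay."""
    return float(np.dot(taps, history))

def fir_neuron(tap_matrix, histories, activation=np.tanh):
    """A neuron fed by several FIR synapses (one row per input line)."""
    s = sum(fir_synapse(t, h) for t, h in zip(tap_matrix, histories))
    return activation(s)

# Example: 2 input lines, order-2 FIR synapses (3 taps each).
rng = np.random.default_rng(0)
tap_matrix = rng.standard_normal((2, 3))
histories = rng.standard_normal((2, 3))
print(fir_neuron(tap_matrix, histories))
```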


Similar Articles

Feedforward and Recurrent Neural Networks Backward Propagation and Hessian in Matrix Form

In this paper we focus on the linear algebra theory behind feedforward (FNN) and recurrent (RNN) neural networks. We review backward propagation, including backward propagation through time (BPTT). Also, we obtain a new exact expression for the Hessian, which represents second-order effects. We show that for t time steps the weight gradient can be expressed as a rank-t matrix, while the weight Hess...
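The rank-t observation in this abstract follows from BPTT accumulating one outer product per time step, so the gradient is a sum of t rank-one terms. A small numerical illustration (random vectors stand in for the backpropagated errors and hidden states, purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 4, 10   # 4 time steps, an n-by-n recurrent weight matrix

# BPTT writes the weight gradient as one outer product per time
# step: dL/dW = sum_k delta_k x_k^T, hence rank at most t.
deltas = rng.standard_normal((t, n))
states = rng.standard_normal((t, n))
grad = sum(np.outer(d, x) for d, x in zip(deltas, states))

print(np.linalg.matrix_rank(grad))   # prints 4 (= t), far below n
```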


Exact Calculation of the Hessian Matrix for the Multilayer Perceptron

The elements of the Hessian matrix consist of the second derivatives of the error measure with respect to the weights and thresholds in the network. They are needed in Bayesian estimation of network regularization parameters, for estimation of error bars on the network outputs, for network pruning algorithms, and for fast re-training of the network following a small change in the training data....
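The cited paper derives the exact Hessian analytically. As a point of contrast, a brute-force central-difference Hessian (a different, purely numerical technique, shown here only as a sanity reference one might validate an exact implementation against on tiny networks) takes only a few lines; the 1-2-1 MLP below is a hypothetical example, not from the paper:

```python
import numpy as np

def numerical_hessian(f, w, eps=1e-5):
    """Central-difference Hessian of a scalar function f at w.
    Needs O(n^2) function evaluations, so it is useful only as a
    reference check for an analytic (exact) Hessian."""
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * eps**2)
    return H

# Squared error of a tiny 1-2-1 MLP, weights flattened into w.
def mlp_error(w, x=0.5, y=1.0):
    h = np.tanh(w[0:2] * x)      # hidden layer, 2 tanh units
    out = np.dot(w[2:4], h)      # linear output unit
    return 0.5 * (out - y) ** 2

w0 = np.array([0.1, -0.3, 0.2, 0.4])
print(numerical_hessian(mlp_error, w0))   # symmetric 4x4 matrix
```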


Exact and Approximate Methods for Computing the Hessian of a Feedforward Artificial Neural Network

We present two optimization techniques based on cubic curve fitting; one based on function values and derivatives at two previous points, the other based on derivatives at three previous points. The latter approach is viewed from a derivative space perspective, obviating the need to compute the vertical translation of the cubic, thus simplifying the fitting problem. We demonstrate the effec...
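For reference, the classical two-point cubic interpolation step (the textbook formula from, e.g., Nocedal and Wright, Numerical Optimization, eq. 3.59; not necessarily the exact variant this abstract proposes) fits a cubic to function values and derivatives at two points and returns its minimizer:

```python
import numpy as np

def cubic_min(a, fa, ga, b, fb, gb):
    """Minimizer of the cubic matching f and f' at points a and b.
    (fa, ga) are f(a), f'(a); (fb, gb) are f(b), f'(b)."""
    d1 = ga + gb - 3.0 * (fa - fb) / (a - b)
    d2 = np.sign(b - a) * np.sqrt(d1 * d1 - ga * gb)
    return b - (b - a) * (gb + d2 - d1) / (gb - ga + 2.0 * d2)

# Example: one interpolation step for f(x) = cos(x) on [2, 4].
f = np.cos
g = lambda x: -np.sin(x)          # f'
a, b = 2.0, 4.0
x = cubic_min(a, f(a), g(a), b, f(b), g(b))
print(x)   # ~3.17, close to the true minimizer pi ~ 3.1416
```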


A Smooth Optimisation Perspective on Training Feedforward Neural Networks

We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN-based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approx...


Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks

Despite the recent success of deep neural networks in various applications, designing and training deep neural networks is still among the greatest challenges in the field. In this work, we address the challenge of designing and training feedforward Multilayer Perceptrons (MLPs) from a smooth optimisation perspective. By characterising the critical point conditions of an MLP-based loss function...




Publication date: 1998